W10B. Writing the Methodology Section

Author

Georgy Gelvanovsky

Published

March 31, 2026

1. Summary

1.1 Overview and Purpose

The Methodology section is the part of a research proposal in which you explain what you will do and why you will do it that way. Its central purpose is to convince the reader that your research design will properly address your research objective and that the selected methods will allow you to interpret your results in a meaningful way. A well-written Methodology section is specific, logically structured, and directly tied to the research objectives established earlier in the proposal.

Several principles govern effective methodology writing. Be specific: vague descriptions of methods give the reader no basis for evaluating your design. Write logically: the reader should be able to follow the chain of reasoning from your research question to your chosen approach without inferential gaps. Draw examples and justification from your Literature Review, because the choices you make in the Methodology must be grounded in precedents or gaps identified in existing scholarship. Use the present and future tenses throughout, since you are describing a design that is planned but not yet completed.

1.2 Seven Questions for Designing a Methodology

Planning a sound methodology requires answering seven key questions. Addressing all of them ensures that your research design is coherent, feasible, and defensible.

1.2.1 Research Approach

The first question is which research approach to adopt: inductive or deductive.

Inductive research design moves from the specific to the general. It is appropriate when there is little or no existing literature on a topic, and when you need flexibility to adjust your approach as new evidence emerges during the study. The process proceeds through three stages: (1) a specific observation of a phenomenon, (2) pattern recognition in the data, and (3) the formation of a general conclusion or working theory.

Example: A researcher observing network performance over time collects data on instances of downtime (specific observation), notices patterns suggesting certain factors correlate with higher downtime rates (pattern recognition), and develops a theory about the causes of network failure (general conclusion).

Deductive research design always starts with an existing theory and moves from the general to the specific. It is used to test theories that were typically derived through prior inductive research. The process involves five stages: (1) identify an existing theory, (2) formulate a hypothesis derived from that theory, (3) collect data, (4) analyze the data, and (5) decide whether to accept or reject the hypothesis.

Example: Starting from the theory that remote IT employees are more productive than office workers, a researcher formulates the hypothesis “Remote employees are more productive,” collects productivity data from before and after employees transition to remote work, analyzes differences in performance, and either confirms or rejects the hypothesis based on statistical significance.

In practice, combining both approaches is common and often effective: begin with an inductive study to construct a working theory, then conduct deductive research to confirm or challenge that theory.

1.2.2 Practical Considerations

Before committing to a design, assess the practical constraints of your study. Key questions include:

  • How much time do you have to collect data, and how much time do you have to write up results?
  • Will you actually be able to access the data you need (e.g., company databases, participants, equipment)?
  • Do you have the research skills required—for instance, the ability to conduct statistical analysis or qualitative interviews?

Ignoring practical constraints produces a design that is theoretically sound but impossible to execute.

1.2.3 Data Collection Methods

The third question concerns how you will collect data. The two primary categories are:

  • Surveys, which include interviews (conducted in person, using open-ended questions that allow detailed responses) and questionnaires (distributed to participants, typically using closed questions that produce structured, comparable answers).
  • Observations, which may be quantitative (systematically counting or measuring specific behaviors or outcomes) or qualitative (producing detailed field notes and rich descriptions of a situation or process).

Your choice of data collection method must match your research question. Open-ended phenomena that require interpretation call for qualitative approaches; hypotheses that require statistical testing call for quantitative ones.

1.2.4 Use of Secondary Data

Secondary data is data collected by someone other than the researcher. It may be appropriate when you lack the time or resources to collect primary data. Secondary data sources include published datasets, institutional records, government statistics, and prior research corpora. The analytical contribution of your study lies in examining these data from a new perspective or applying them to a new question.

1.2.5 Sampling Method

Because studying an entire population (all members of the group of interest) is rarely feasible, researchers work with a sample—a subset of the population selected to represent it. The method by which the sample is selected affects the validity and generalizability of your findings.

There are two broad categories of sampling:

Probability sampling methods give every member of the population a known, non-zero chance of being selected:

  • Simple random sample: individuals are selected entirely at random.
  • Systematic sample: individuals are selected at regular intervals (e.g., every third person on a list).
  • Stratified sample: the population is divided into meaningful subgroups, and individuals are sampled proportionally from each group.
  • Cluster sample: the population is divided into clusters; entire clusters are randomly selected and all individuals within them are included.

Non-probability sampling methods select participants on non-random grounds, so each member's chance of selection is unknown and may be zero. They are faster and cheaper but less generalizable:

  • Convenience sample: selecting whoever is most easily accessible to the researcher.
  • Purposive sample: deliberately selecting individuals who meet specific criteria relevant to the research question.
  • Snowball sample: initial participants recruit further participants from their networks, useful for hard-to-reach populations.
  • Quota sample: selecting a fixed number of individuals from each subgroup to meet predefined quotas.
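The mechanics of the probability sampling methods above can be sketched in a few lines of Python. The population of numbered IDs, the strata, and the sample sizes are all hypothetical, chosen only to make each selection rule visible:

```python
import random

# Hypothetical population of 100 participant IDs, purely for illustration.
population = list(range(100))

# Simple random sample: individuals selected entirely at random.
simple = random.sample(population, 10)

# Systematic sample: every k-th individual from a random starting point.
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# Stratified sample: divide into subgroups (even vs. odd IDs stand in
# for real strata here) and sample proportionally from each subgroup.
strata = {"even": [p for p in population if p % 2 == 0],
          "odd": [p for p in population if p % 2 == 1]}
stratified = [x for group in strata.values()
              for x in random.sample(group, 5)]

# Cluster sample: divide into clusters of 10, randomly select 2 whole
# clusters, and include every individual within them.
clusters = [population[i:i + 10] for i in range(0, len(population), 10)]
cluster_sample = [x for c in random.sample(clusters, 2) for x in c]
```

Note how the cluster sample fixes its size in whole clusters, while the stratified sample controls composition within each subgroup; this difference is exactly what affects generalizability.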

1.2.6 Sample Size

There is no universal answer to the question of how large a sample should be. Appropriate sample size in IT research depends on the research objectives, the level of statistical precision required, the variability of the population, and the sampling method used. When in doubt, use a sample size calculator, which can determine the minimum adequate sample size given your desired confidence level and margin of error.
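One widely used calculation behind such calculators is Cochran's formula, n = z²·p(1−p)/e². The sketch below implements it under the standard conservative assumption p = 0.5, with an optional finite-population correction; the example numbers are illustrative:

```python
import math

def minimum_sample_size(z, margin_of_error, population=None, p=0.5):
    """Cochran's formula for minimum sample size.

    z: z-score for the desired confidence level (e.g. 1.96 for 95%).
    margin_of_error: acceptable error as a proportion (e.g. 0.05 for +/-5%).
    p: expected proportion in the population; 0.5 is the most conservative.
    population: if given, apply the finite-population correction.
    """
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# 95% confidence, +/-5% margin, effectively unlimited population:
print(minimum_sample_size(1.96, 0.05))       # 385
# The same precision for a finite population of 500 employees:
print(minimum_sample_size(1.96, 0.05, 500))  # 218
```

The finite-population correction shows why small organizations need proportionally larger samples: 218 of 500 employees versus 385 of an arbitrarily large population.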

1.2.7 Data Analysis

The final question concerns how you will analyze the data you collect.

Quantitative analysis involves collecting, processing, and interpreting numerical data. Statistical methods allow you to summarize patterns in your sample, estimate population parameters, and test hypotheses with defined levels of confidence.
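As a minimal sketch of quantitative hypothesis testing, the before/after productivity scenario from Section 1.2.1 can be analyzed with a paired t statistic. The scores below are invented purely for illustration, and the critical value is read from a standard t table:

```python
from statistics import mean, stdev

# Hypothetical weekly task counts for the same eight employees before
# (office) and after (remote) the transition -- illustrative data only.
office = [22, 25, 19, 24, 21, 23, 20, 26]
remote = [26, 28, 22, 27, 24, 25, 23, 30]

# Paired design: analyze the per-employee differences.
diffs = [r - o for r, o in zip(remote, office)]
d_mean = mean(diffs)
d_sd = stdev(diffs)
n = len(diffs)

# Paired t statistic: mean difference divided by its standard error.
t = d_mean / (d_sd / n ** 0.5)

# Two-tailed critical value for alpha = 0.05 with n - 1 = 7 degrees of
# freedom is about 2.365 (from a t table).
print(f"mean difference = {d_mean:.2f}, t = {t:.2f}")
print("reject H0" if abs(t) > 2.365 else "fail to reject H0")
```

With real data you would report an exact p-value (e.g. via scipy.stats.ttest_rel) rather than a table lookup, but the logic is the same: compare the observed statistic against the threshold implied by your chosen significance level.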

Qualitative analysis involves interpreting non-numerical data—text, interview transcripts, observational notes, or images. The goal is to identify recurring patterns, themes, or meanings, and to extract the elements most directly relevant to your research question.

In mixed-methods designs, both types of analysis can be combined: quantitative methods may establish the scope or prevalence of a phenomenon while qualitative methods explain the mechanisms or experiences behind it.

1.3 Explaining Your Choices

A common mistake in writing the Methodology is to list approaches and methods without justifying them. The Methodology section is not a menu of techniques—it is an argument. For every element of your research design, clearly explain why it is the most appropriate choice given your specific research problem. If you select a deductive approach, explain what existing theory you are testing and why testing that theory matters. If you choose a convenience sample, acknowledge it and explain why a probability sample is not feasible in your context.

Justification demonstrates that your design choices are deliberate and informed rather than arbitrary.

1.4 Acknowledging Limitations

Every research design has limitations: constraints on generalizability, potential sources of bias, or practical boundaries that prevent a fully exhaustive investigation. Foresee and acknowledge these limitations explicitly in your Methodology. This is not a sign of weakness—it is a sign of scholarly rigor. For each limitation you identify, provide a plan of action: explain how you will mitigate the limitation or why the study remains valuable despite it. A reader who sees that you have thought carefully about the boundaries of your work is far more likely to trust your conclusions.